Low-light stereo image enhancement (LLSIE) is a relatively new task that aims to enhance the quality of visually unpleasant stereo images captured in dark conditions. So far, deep LLSIE has received little study due to several challenging issues: the task remains far from well addressed, and current methods clearly suffer from two shortcomings: 1) insufficient cross-view interaction; 2) lack of long-range dependency for intra-view learning. In this paper, we therefore propose a novel LLSIE model, termed \underline{Suf}ficient C\underline{r}oss-View \underline{In}teraction Network (SufrinNet). Specifically, we present a sufficient inter-view interaction module (SIIM) to enhance the information exchange across views. SIIM not only discovers the cross-view correlations at different scales, but also explores cross-scale information interaction. Besides, we present a spatial-channel information mining block (SIMB) for intra-view feature extraction, whose benefits are twofold: one is long-range dependency capture to build spatial long-range relationships, and the other is expanded channel information refinement that enhances the information flow in the channel dimension. Extensive experiments on the Flickr1024, KITTI 2012, KITTI 2015 and Middlebury datasets show that our method obtains better illumination adjustment and detail recovery, and achieves state-of-the-art performance compared to other related methods. Our codes, datasets and models will be publicly available.
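As a rough illustration of what cross-view interaction between rectified stereo features can look like, here is a minimal PyTorch sketch of row-wise cross-attention along the width axis (where stereo correspondences lie). The module name, dimensions and residual fusion are illustrative assumptions; this is not the authors' SIIM, which additionally interacts across scales.

```python
import torch
import torch.nn as nn

class CrossViewAttention(nn.Module):
    """Toy stereo cross-view attention (hypothetical; not the paper's SIIM).

    For rectified stereo pairs, correspondences lie on the same row, so
    attention is computed along the width axis only.
    """
    def __init__(self, channels: int):
        super().__init__()
        self.q = nn.Conv2d(channels, channels, 1)
        self.k = nn.Conv2d(channels, channels, 1)
        self.v = nn.Conv2d(channels, channels, 1)

    def forward(self, left: torch.Tensor, right: torch.Tensor) -> torch.Tensor:
        # left, right: (B, C, H, W) feature maps of the two views
        q = self.q(left).permute(0, 2, 3, 1)    # (B, H, W, C)
        k = self.k(right).permute(0, 2, 1, 3)   # (B, H, C, W)
        v = self.v(right).permute(0, 2, 3, 1)   # (B, H, W, C)
        attn = torch.softmax(q @ k, dim=-1)     # (B, H, W, W): row-wise correlation
        fused = (attn @ v).permute(0, 3, 1, 2)  # right features aligned to the left view
        return left + fused                     # residual cross-view fusion

left = torch.randn(2, 32, 64, 96)
right = torch.randn(2, 32, 64, 96)
enhanced_left = CrossViewAttention(32)(left, right)  # (2, 32, 64, 96)
```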
Medical image segmentation is a fundamental and critical step in many image-guided clinical approaches. The recent success of deep-learning-based segmentation methods usually relies on large amounts of labeled data, which are particularly difficult and costly to obtain, especially in medical imaging, where only experts can provide reliable and accurate annotations. Semi-supervised learning has emerged as an appealing strategy and has been widely applied to medical image segmentation tasks to train deep models with limited annotations. In this paper, we present a comprehensive review of recently proposed semi-supervised learning methods and summarize their technical novelties and empirical results. Furthermore, we analyze and discuss the limitations of existing approaches and several unresolved problems. We hope this review can inspire the research community to explore solutions to this challenge and further advance the field of medical image segmentation.
To obtain lower inference latency and a smaller memory footprint for deep neural networks, model quantization has been widely employed in deep model deployment, converting floating-point values to low-precision integers. However, previous methods (such as quantization-aware training and post-training quantization) require the original data for fine-tuning or calibration of the quantized model, which makes them inapplicable when the original data are inaccessible due to privacy or security concerns. This gives birth to data-free quantization methods with synthetic data generation. Still, current data-free quantization methods suffer from severe performance degradation when quantizing a model to lower bit-widths, caused by the low inter-class separability of semantic features. To this end, we propose a new and effective data-free quantization method termed ClusterQ, which utilizes feature distribution alignment for synthetic data generation. To obtain high inter-class separability of semantic features, we cluster and align the feature distribution statistics to imitate the distribution of real data, so that the performance degradation is alleviated. Moreover, we incorporate diversity enhancement to solve class-wise mode collapse. We also employ an exponential moving average to update the centroid of each cluster for further feature distribution improvement. Extensive experiments with different deep models (e.g., ResNet-18 and MobileNet-V2) on the ImageNet dataset demonstrate that our proposed ClusterQ obtains state-of-the-art performance.
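As a small illustration of the exponential-moving-average centroid update mentioned above, here is a hedged NumPy sketch; the function name, momentum value and per-class mean statistic are assumptions, not the paper's exact procedure.

```python
import numpy as np

def ema_update_centroids(centroids, features, labels, momentum=0.99):
    """EMA update of per-class feature centroids (illustrative values)."""
    for c in np.unique(labels):
        batch_mean = features[labels == c].mean(axis=0)
        centroids[c] = momentum * centroids[c] + (1.0 - momentum) * batch_mean
    return centroids

# Toy usage: 10 classes, 64-d semantic features
centroids = np.zeros((10, 64))
features = np.random.randn(128, 64)
labels = np.random.randint(0, 10, size=128)
centroids = ema_update_centroids(centroids, features, labels)
```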
Medical image segmentation is a fundamental and critical step in many clinical approaches. Semi-supervised learning has been widely applied to medical image segmentation tasks because it alleviates the heavy burden of acquiring expert-examined annotations and exploits the advantage of more easily obtained unlabeled data. Although consistency learning has been proven effective by enforcing invariance of predictions under different distributions, existing approaches cannot fully utilize region-level shape constraints and boundary-level distance information from unlabeled data. In this paper, we propose a novel uncertainty-guided mutual consistency learning framework that effectively exploits unlabeled data by integrating intra-task consistency learning from up-to-date predictions for self-ensembling with cross-task consistency learning from task-level regularization to exploit geometric shape information. The framework is guided by the model's estimated segmentation uncertainty to select relatively certain predictions for consistency learning, so that more reliable information from unlabeled data is effectively exploited. We extensively validate the proposed method on two publicly available benchmark datasets: the Left Atrium Segmentation (LA) dataset and the Brain Tumor Segmentation (BraTS) dataset. Experimental results demonstrate that our method achieves performance gains by leveraging unlabeled data and outperforms existing semi-supervised segmentation methods.
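To make the uncertainty-guided selection concrete, here is a minimal PyTorch sketch of a consistency loss masked by an uncertainty estimate (e.g., predictive entropy from Monte Carlo dropout). The threshold and masking scheme are illustrative assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def uncertainty_masked_consistency(pred_a, pred_b, uncertainty, threshold=0.5):
    """Consistency loss computed only on low-uncertainty pixels (illustrative)."""
    mask = (uncertainty < threshold).float()            # keep confident pixels
    mse = F.mse_loss(pred_a, pred_b, reduction="none")  # per-pixel disagreement
    return (mse * mask).sum() / mask.sum().clamp(min=1.0)

pred_a = torch.rand(2, 1, 64, 64)  # prediction from one task/branch
pred_b = torch.rand(2, 1, 64, 64)  # prediction from the other task/branch
unc = torch.rand(2, 1, 64, 64)     # estimated per-pixel uncertainty
loss = uncertainty_masked_consistency(pred_a, pred_b, unc)
```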
Consistency training has proven to be an advanced semi-supervised framework, achieving promising results on medical image segmentation tasks by enforcing invariance of predictions over different views of the inputs. However, as the model parameters are iteratively updated, the models tend to reach a coupled state and eventually lose the ability to exploit unlabeled data. To address this issue, we propose a novel semi-supervised segmentation model based on a parameter decoupling strategy to encourage consistent predictions from diverse views. Specifically, we first adopt a two-branch network to simultaneously produce predictions for each image. During training, we decouple the parameters of the two prediction branches via a quadratic cosine distance to construct diverse views in latent space. On this basis, the feature extractor is constrained to encourage consistency of the probability maps generated by the classifiers under diversified features. In the overall training process, the parameters of the feature extractor and the classifiers are updated alternately via a consistency regularization operation and a decoupling operation to progressively improve the generalization performance of the model. Our method achieves state-of-the-art semi-supervised performance on the Atrial Segmentation Challenge dataset, demonstrating the effectiveness of our framework. Code is available at https://github.com/bx0903/pdc.
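A hedged sketch of the parameter-decoupling idea: penalize the squared cosine similarity between the parameter vectors of the two prediction branches so they are pushed toward orthogonality. Flattening all parameters into a single vector is an illustrative simplification of the paper's quadratic cosine distance, not its exact form.

```python
import torch
import torch.nn as nn

def decoupling_loss(branch_a: nn.Module, branch_b: nn.Module) -> torch.Tensor:
    """Squared-cosine penalty pushing two branches toward orthogonal parameters."""
    pa = torch.cat([p.flatten() for p in branch_a.parameters()])
    pb = torch.cat([p.flatten() for p in branch_b.parameters()])
    cos = torch.dot(pa, pb) / (pa.norm() * pb.norm() + 1e-8)
    return cos ** 2  # minimized when the two parameter vectors are orthogonal

branch_a = nn.Conv2d(3, 8, 3)  # stand-ins for the two prediction branches
branch_b = nn.Conv2d(3, 8, 3)
loss = decoupling_loss(branch_a, branch_b)
```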
The nuclear norm and the Schatten-$p$ quasi-norm are popular rank surrogates in low-rank matrix recovery. Unfortunately, computing the nuclear norm or Schatten-$p$ quasi-norm of a tensor is NP-hard, which is a pity for low-rank tensor completion (LRTC) and tensor robust principal component analysis (TRPCA). In this paper, we propose a new class of tensor rank regularizers based on the Euclidean norms of the CP component vectors of a tensor, and show that these regularizers are monotone transformations of the tensor Schatten-$p$ quasi-norm. This connection enables us to implicitly minimize the Schatten-$p$ quasi-norm in LRTC and TRPCA. The methods do not use singular value decomposition and therefore scale to large tensors. Moreover, the methods are insensitive to the choice of the initial rank and, compared to the nuclear norm, provide arbitrarily sharper rank surrogates for low-rank tensor recovery. On the other hand, we study the generalization ability of LRTC with the Schatten-$p$ quasi-norm regularization and with the proposed regularizers. The theorems show that a relatively sharper regularizer leads to a tighter error bound, which is consistent with our numerical results. Numerical results on synthetic and real data demonstrate the effectiveness and superiority of our methods compared with baseline methods.
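For intuition, a plausible form of such a CP-based regularizer for a third-order tensor is sketched below; this is an illustration consistent with the abstract, not necessarily the paper's exact definition.

```latex
\[
  R_p(\mathcal{X}) \;=\;
  \min_{\mathcal{X} = \sum_{r=1}^{R} u_r \circ v_r \circ w_r}\;
  \sum_{r=1}^{R} \bigl( \|u_r\|_2 \, \|v_r\|_2 \, \|w_r\|_2 \bigr)^{p},
  \qquad 0 < p \le 1.
\]
```

In the matrix case (two factor vectors), variational forms of this flavor recover the Schatten-$p$ quasi-norm $\|X\|_{S_p}^p = \sum_i \sigma_i(X)^p$, which is the kind of monotone-transformation connection the abstract refers to.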
Dimensionality reduction techniques aim to represent high-dimensional data in low-dimensional spaces in order to extract hidden and useful information, or to facilitate visual understanding and interpretation of the data. However, few of them take into account the potential cluster information implicitly contained in high-dimensional data. In this paper, we propose LaptSNE, a new graph-based nonlinear dimensionality reduction method based on t-SNE, one of the best techniques for visualizing high-dimensional data as 2D scatter plots. Specifically, LaptSNE leverages the eigenvalue information of the graph Laplacian to shrink the potential clusters in the low-dimensional embedding while learning to preserve the local and global structure from high-dimensional space to low-dimensional space. Solving the proposed model is nontrivial because the eigenvalues of the normalized symmetric Laplacian are functions of the decision variables. We provide a majorization-minimization algorithm with convergence guarantees to solve the optimization problem of LaptSNE and show how to calculate the gradient analytically, which may be of broad interest when considering optimization with a Laplacian-composited objective. We evaluate our method by formal comparison with state-of-the-art methods, both visually and via established quantitative measures. The results demonstrate the superiority of our method over baselines such as t-SNE and UMAP. We also extend our method to spectral clustering and establish an accurate and parameter-free clustering algorithm, which provides high reliability and convenience in real applications.
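A plausible shape of the LaptSNE objective, written only for intuition (the paper's exact formulation may differ): the usual t-SNE Kullback-Leibler term plus a penalty on the smallest eigenvalues of the normalized Laplacian built from the embedding, which encourages roughly $k$ well-separated clusters.

```latex
\[
  \min_{Y}\; \mathrm{KL}\bigl(P \,\Vert\, Q(Y)\bigr)
  \;+\; \lambda \sum_{i=1}^{k} \lambda_i\bigl(L_{\mathrm{sym}}(Y)\bigr),
\]
```

Here $P$ denotes the high-dimensional pairwise affinities, $Q(Y)$ the low-dimensional ones, and $\lambda_i(L_{\mathrm{sym}}(Y))$ the $i$-th smallest eigenvalue of the normalized symmetric Laplacian of a similarity graph on the embedding $Y$; the dependence of these eigenvalues on $Y$ is what makes the problem nontrivial.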
This work presents an unsupervised deep discriminant analysis for clustering. The method is based on deep neural networks and aims to minimize the intra-cluster discrepancy and maximize the inter-cluster discrepancy in an unsupervised manner. The method is able to project the data into a nonlinear low-dimensional latent space with compact and distinct distribution patterns, so that the data clusters can be effectively identified. We further provide an extension of the method so that available graph information can be effectively exploited to improve clustering performance. Extensive numerical results on image and non-image data, with or without graph information, demonstrate the effectiveness of the proposed methods.
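As a toy rendering of the stated objective (minimize intra-cluster discrepancy, maximize inter-cluster discrepancy), here is a hedged PyTorch sketch over latent codes; the hinge-style between-cluster term and the margin are illustrative choices, not the paper's loss.

```python
import torch

def discriminative_loss(z, assignments, margin=1.0):
    """Shrink clusters around their centroids; push centroids apart (toy loss)."""
    centroids, intra = [], 0.0
    for c in assignments.unique():
        zc = z[assignments == c]
        mu = zc.mean(dim=0)
        centroids.append(mu)
        intra = intra + ((zc - mu) ** 2).sum(dim=1).mean()  # within-cluster variance
    centroids = torch.stack(centroids)                      # (K, d), assumes K >= 2
    K = centroids.shape[0]
    dists = torch.cdist(centroids, centroids)               # pairwise centroid distances
    off_diag = dists[~torch.eye(K, dtype=torch.bool)]
    inter = torch.relu(margin - off_diag).mean()            # hinge on close centroids
    return intra / K + inter

z = torch.randn(256, 16)               # latent codes from the encoder
labels = torch.randint(0, 4, (256,))   # current cluster assignments
loss = discriminative_loss(z, labels)
```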
This paper proposes a new variant of Frank-Wolfe (FW), called $k$FW. Standard FW suffers from slow convergence: iterates often zig-zag as the update directions oscillate around the extreme points of the constraint set. The new variant, $k$FW, overcomes this problem by using two stronger subproblem oracles in each iteration. The first is a $k$ linear optimization oracle ($k$LOO) that computes the $k$ best update directions (rather than one). The second is a $k$ direction search ($k$DS) that minimizes the objective over the set represented by the $k$ best update directions and the previous iterate. When the problem solution admits a sparse representation, both oracles are easy to compute, and $k$FW converges quickly for smooth convex objectives and several interesting constraint sets: $k$FW achieves finite $\frac{4L_f^3 D^4}{\gamma \delta^2}$ convergence on polytopes and group norm balls, and linear convergence on spectral and nuclear norm balls. Numerical experiments validate the effectiveness of $k$FW and demonstrate an order-of-magnitude speedup over existing methods.
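To illustrate the first oracle, here is a hedged NumPy sketch of a $k$LOO for a polytope given explicitly by its vertex list: the standard FW linear oracle returns the single vertex minimizing $\langle \nabla f(x), v \rangle$, while the $k$-oracle returns the $k$ best. Explicit vertex enumeration is purely illustrative; practical oracles exploit problem structure.

```python
import numpy as np

def k_loo(grad, vertices, k):
    """k-LOO sketch: return the k vertices most aligned with -grad."""
    scores = vertices @ grad             # <grad, v> for every vertex v
    best = np.argsort(scores)[:k]        # k smallest inner products
    return vertices[best]

vertices = np.eye(5)                     # vertices of the probability simplex
grad = np.random.randn(5)                # gradient at the current iterate
directions = k_loo(grad, vertices, k=3)  # candidates for the k-direction search
```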
Masked image modeling (MIM) performs strongly in pre-training large vision Transformers (ViTs). However, small models that are critical for real-world applications cannot, or can only marginally, benefit from this pre-training approach. In this paper, we explore distillation techniques to transfer the success of large MIM-based pre-trained models to smaller ones. We systematically study different options in the distillation framework, including distillation targets, losses, input, network regularization, sequential distillation, etc., revealing that: 1) distilling token relations is more effective than CLS-token- and feature-based distillation; 2) using an intermediate layer of the teacher network as the target performs better than using the last layer when the depth of the student mismatches that of the teacher; 3) weak regularization is preferred; etc. With these findings, we achieve significant fine-tuning accuracy improvements over from-scratch MIM pre-training on ImageNet-1K classification, using the ViT-Tiny, ViT-Small, and ViT-Base models, with +4.2%/+2.4%/+1.4% gains, respectively. Our TinyMIM model of base size achieves 52.2 mIoU on ADE20K semantic segmentation, which is +4.1 mIoU higher than the MAE baseline. Our TinyMIM model of tiny size achieves 79.6% top-1 accuracy on ImageNet-1K image classification, which sets a new record for small vision models of the same size and computation budget. This strong performance suggests an alternative way to develop small vision Transformer models, that is, by exploring better training methods rather than introducing inductive biases into architectures as in most previous works. Code is available at https://github.com/OliverRensu/TinyMIM.
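As a rough sketch of what "distilling token relations" can mean, here is a hedged PyTorch example that matches softmax-normalized token-affinity maps between student and teacher; TinyMIM's actual relation targets (drawn from a chosen intermediate teacher layer) are richer than this toy version, and the temperature and normalization are assumptions. Note that relation maps are token-by-token, so the student and teacher may have different channel widths.

```python
import torch
import torch.nn.functional as F

def token_relation_distillation(student_tokens, teacher_tokens, tau=1.0):
    """Match softmax-normalized token-affinity maps (toy relation distillation)."""
    def relations(tokens):
        tokens = F.normalize(tokens, dim=-1)           # (B, N, C), any channel width
        return tokens @ tokens.transpose(1, 2) / tau   # (B, N, N) token affinities

    s = F.log_softmax(relations(student_tokens), dim=-1)
    t = F.softmax(relations(teacher_tokens), dim=-1)
    return F.kl_div(s, t, reduction="batchmean")

student = torch.randn(2, 196, 192)  # e.g., ViT-Tiny tokens
teacher = torch.randn(2, 196, 768)  # e.g., ViT-Base tokens
loss = token_relation_distillation(student, teacher)
```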